
    Revisiting Classical Multiclass Linear Discriminant Analysis with a Novel Prototype-based Interpretable Solution

    Linear discriminant analysis (LDA) is a fundamental method for feature extraction and dimensionality reduction. Despite having many variants, classical LDA retains its own importance as a keystone of statistical pattern recognition. For a dataset containing C clusters, the classical solution to LDA extracts at most C-1 features. Here, we introduce a novel solution to classical LDA, called LDA++, that yields C features, each interpretable as measuring similarity to one cluster. This novel solution bridges dimensionality reduction and multiclass classification. Specifically, we prove that, for homoscedastic Gaussian data and under some mild conditions, the optimal weights of a linear multiclass classifier also form an optimal solution to LDA. In addition, we show that LDA++ reveals some important new facts about LDA that remarkably change our understanding of classical multiclass LDA, 75 years after its introduction. We provide a complete numerical solution for LDA++ for three cases: (1) when the scatter matrices can be constructed explicitly, (2) when constructing the scatter matrices is infeasible, and (3) the kernel extension.
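
    To make the C-1 limit concrete, here is a minimal NumPy sketch of the classical multiclass LDA solution that the abstract revisits: the between-class scatter of C clusters has rank at most C-1, so the generalized eigenproblem yields at most C-1 discriminant directions. The function name and the pseudo-inverse shortcut are illustrative assumptions; the paper's LDA++ solution itself is not reproduced here.

    import numpy as np

    def classical_lda(X, y):
        """Classical multiclass LDA: X is (n, d) data, y holds integer labels.
        Returns a (d, C-1) projection matrix."""
        classes = np.unique(y)
        d = X.shape[1]
        mean_all = X.mean(axis=0)
        S_w = np.zeros((d, d))  # within-class scatter
        S_b = np.zeros((d, d))  # between-class scatter (rank <= C-1)
        for c in classes:
            Xc = X[y == c]
            mean_c = Xc.mean(axis=0)
            S_w += (Xc - mean_c).T @ (Xc - mean_c)
            diff = (mean_c - mean_all).reshape(-1, 1)
            S_b += len(Xc) * (diff @ diff.T)
        # Solve S_b v = lambda * S_w v via the pseudo-inverse for simplicity.
        eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_w) @ S_b)
        order = np.argsort(eigvals.real)[::-1]
        return eigvecs[:, order[:len(classes) - 1]].real

    # Three Gaussian clusters in five dimensions give at most 3 - 1 = 2 features.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 1.0, size=(50, 5)) for m in (0.0, 3.0, 6.0)])
    y = np.repeat([0, 1, 2], 50)
    print(classical_lda(X, y).shape)  # (5, 2)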

    A Pedagogically Sound yet Efficient Deletion algorithm for Red-Black Trees: The Parity-Seeking Delete Algorithm

    Red-black (RB) trees are one of the most efficient variants of balanced binary search trees. However, they have long been criticized for being too complicated, hard to explain, and unsuitable for pedagogical purposes. Sedgewick (2008) proposed left-leaning red-black (LLRB) trees, in which red links are restricted to left children, together with concise recursive insert and delete algorithms. However, the top-down deletion algorithm of LLRB trees is still very complicated and highly inefficient. In this paper, we first consider 2-3 red-black trees, in which a node's children cannot both be red. We propose a parity-seeking delete algorithm whose basic idea is to bring the deficient subtree on a par with its sibling: either by fixing the deficient subtree or by making the sibling deficient as well, ascending the deficiency to the parent node. This is the first pedagogically sound algorithm for the delete operation in red-black trees. We then amend our algorithm and propose a parity-seeking delete algorithm for classical RB trees. Our experiments show that, despite requiring more rotations, 2-3 RB trees are almost as efficient as RB trees and twice as fast as LLRB trees. Moreover, RB trees with the proposed parity-seeking delete algorithm perform the same number of rotations and run in almost identical time as the classic delete algorithm. While being extremely efficient, the proposed parity-seeking delete algorithm is easily understandable and suitable for pedagogical purposes.
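
    The following is a hypothetical sketch of the parity-seeking idea as described above, using the standard deficiency case analysis for a left subtree that is one black node short (the right-deficient case is the mirror image). The Node layout and helper names are our assumptions, not the paper's code: the fix-up either repairs the deficiency locally or recolors the sibling red, making it deficient as well, and reports that the deficiency has ascended to the parent.

    RED, BLACK = "R", "B"

    class Node:
        def __init__(self, key, color=RED, left=None, right=None):
            self.key, self.color = key, color
            self.left, self.right = left, right

    def is_red(n):
        return n is not None and n.color == RED

    def rotate_left(p):
        s = p.right
        p.right, s.left = s.left, p
        return s

    def rotate_right(p):
        s = p.left
        p.left, s.right = s.right, p
        return s

    def fix_left_deficiency(p):
        """p's left subtree is one black short. Returns (new_root, still_deficient)."""
        s = p.right                        # sibling of the deficient subtree
        if is_red(s):                      # red sibling: rotate it above p so the
            p.color, s.color = RED, BLACK  # deficient subtree gets a black sibling;
            root = rotate_left(p)          # the cases below then resolve fully,
            root.left, _ = fix_left_deficiency(p)
            return root, False             # since p is now red
        if not is_red(s.left) and not is_red(s.right):
            s.color = RED                  # parity-seeking step: make the sibling
            if p.color == RED:             # deficient too; a red parent can absorb
                p.color = BLACK            # the shared deficiency ...
                return p, False
            return p, True                 # ... otherwise it ascends to the parent
        if not is_red(s.right):            # near nephew red, far nephew black:
            s.left.color, s.color = BLACK, RED
            p.right = s = rotate_right(s)  # reduce to the far-nephew-red case
        s.color, p.color = p.color, BLACK  # far nephew red: one rotation fixes
        s.right.color = BLACK              # the deficiency for good
        return rotate_left(p), False

    # After deleting a black leaf, the left subtree of 10 is one black short.
    p = Node(10, BLACK, None, Node(20, BLACK, None, Node(30, RED)))
    root, deficient = fix_left_deficiency(p)
    print(root.key, root.color, deficient)  # 20 B False: parity restored locally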

    Neural Generalization of Multiple Kernel Learning

    Multiple kernel learning (MKL) is a conventional way to learn the kernel function in kernel-based methods, and MKL algorithms enhance the performance of kernel methods. However, these methods have lower complexity than deep learning models and fall short of them in recognition accuracy. Deep learning models can learn complex functions by applying nonlinear transformations to data through several layers. In this paper, we show that a typical MKL algorithm can be interpreted as a one-layer neural network with linear activation functions. Based on this interpretation, we propose a Neural Generalization of Multiple Kernel Learning (NGMKL), which extends the conventional multiple kernel learning framework to a multi-layer neural network with nonlinear activation functions. Our experiments on several benchmarks show that the proposed method increases the complexity of MKL algorithms and leads to higher recognition accuracy.
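
    As a structural illustration of this interpretation, here is a hypothetical NumPy sketch: classical MKL evaluates a learned linear combination of base kernels (one linear layer over kernel values), while an NGMKL-style model stacks such combinations with nonlinear activations. The choice of base kernels, the layer shapes, and the tanh activation are our assumptions, and the procedure for training the weights is omitted.

    import numpy as np

    def base_kernels(x, y):
        """Evaluate three fixed base kernels on a pair of vectors."""
        return np.array([
            x @ y,                                # linear kernel
            np.exp(-0.5 * np.sum((x - y) ** 2)),  # RBF kernel
            (x @ y + 1.0) ** 2,                   # quadratic polynomial kernel
        ])

    def mkl_kernel(x, y, w):
        """Classical MKL: one linear layer over the base kernel values."""
        return float(w @ base_kernels(x, y))

    def ngmkl_kernel(x, y, weights):
        """NGMKL-style generalization: stacked layers with nonlinearities."""
        h = base_kernels(x, y)
        for W in weights[:-1]:
            h = np.tanh(W @ h)           # nonlinear combination of kernel values
        return float(weights[-1] @ h)    # final linear read-out

    rng = np.random.default_rng(0)
    x, y = rng.normal(size=3), rng.normal(size=3)
    w = np.array([0.2, 0.5, 0.3])                        # MKL mixture weights
    Ws = [rng.normal(size=(4, 3)), rng.normal(size=4)]   # a two-layer NGMKL
    print(mkl_kernel(x, y, w), ngmkl_kernel(x, y, Ws))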